    Resampling from the past to improve on MCMC algorithms

    We introduce the idea that resampling from past observations in a Markov Chain Monte Carlo sampler can speed up convergence. We prove that proper resampling from the past does not disturb the limit distribution of the algorithm. We illustrate the method with two examples: the first is a Bayesian analysis of stochastic volatility models, the second a Bayesian phylogeny reconstruction.
    Keywords: Monte Carlo methods, Resampling, Stochastic volatility models, Bayesian phylogeny reconstruction.
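    To make the idea concrete, here is a minimal sketch, under assumptions of my own, of one way a sampler can recycle its history: with small probability the chain proposes a jittered draw from its stored past, treated as an independence proposal whose density is a Gaussian kernel density estimate over that past. The standard-Gaussian target, the mixing probability eps and the bandwidth h are placeholders, and the sketch ignores the conditions the paper establishes for the resampling to leave the limit distribution undisturbed; it is an illustration, not the authors' scheme.

        import numpy as np

        def log_target(x):
            # Illustrative target: standard Gaussian log-density (placeholder).
            return -0.5 * np.dot(x, x)

        def kde_logpdf(y, past, h):
            # Log-density at y of a Gaussian KDE built on the stored past draws.
            d = past - y
            log_k = -0.5 * np.sum(d * d, axis=1) / h**2 - 0.5 * y.size * np.log(2 * np.pi * h**2)
            return np.logaddexp.reduce(log_k) - np.log(len(past))

        def sampler_with_past(n_iter=5000, dim=2, eps=0.1, step=0.5, h=0.3, seed=0):
            rng = np.random.default_rng(seed)
            x = np.zeros(dim)
            past = [x.copy()]
            for _ in range(n_iter):
                if len(past) > 10 and rng.random() < eps:
                    # Resample a past state and jitter it: an independence proposal from the KDE.
                    y = past[rng.integers(len(past))] + h * rng.standard_normal(dim)
                    P = np.asarray(past)
                    log_ratio = (log_target(y) + kde_logpdf(x, P, h)) - (log_target(x) + kde_logpdf(y, P, h))
                else:
                    # Plain random-walk Metropolis move.
                    y = x + step * rng.standard_normal(dim)
                    log_ratio = log_target(y) - log_target(x)
                if np.log(rng.random()) < log_ratio:
                    x = y
                past.append(x.copy())
            return np.asarray(past)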

    An Adaptive Version for the Metropolis Adjusted Langevin Algorithm with a Truncated Drift

    This paper proposes an adaptive version of the Metropolis adjusted Langevin algorithm with a truncated drift (T-MALA). The scale parameter and the covariance matrix of the proposal kernel of the algorithm are simultaneously and recursively updated in order to reach the optimal acceptance rate of 0.574 (see Roberts and Rosenthal (2001)) and to estimate and use the correlation structure of the target distribution. We develop some convergence results for the algorithm. A simulation example is presented.
    Keywords: Markov Chain Monte Carlo, Stochastic approximation algorithms, Metropolis Adjusted Langevin algorithm, geometric rate of convergence.
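    As a rough sketch of the kind of recursion described (and only of it: the paper also adapts the full covariance matrix, which is omitted here), the following adapts the log step size of a MALA proposal with norm-truncated drift toward the 0.574 acceptance target by stochastic approximation. The standard-Gaussian target, the truncation radius delta and the 1/n step-size sequence are placeholder choices of this illustration.

        import numpy as np

        def log_target(x):
            return -0.5 * np.dot(x, x)            # placeholder target: standard Gaussian

        def grad_log_target(x):
            return -x                             # its gradient

        def truncated_drift(x, delta):
            # Half-gradient drift, truncated so its norm never exceeds delta.
            g = 0.5 * grad_log_target(x)
            norm = np.linalg.norm(g)
            return g if norm <= delta else g * (delta / norm)

        def adaptive_tmala(n_iter=10000, dim=5, delta=10.0, target_acc=0.574, seed=0):
            rng = np.random.default_rng(seed)
            x = np.zeros(dim)
            log_sigma = 0.0                       # adapted log scale
            for n in range(1, n_iter + 1):
                s2 = np.exp(2 * log_sigma)
                mean_x = x + s2 * truncated_drift(x, delta)
                y = mean_x + np.sqrt(s2) * rng.standard_normal(dim)
                mean_y = y + s2 * truncated_drift(y, delta)
                # Proposal log-densities q(y|x) and q(x|y), up to a common constant.
                log_q_yx = -np.sum((y - mean_x) ** 2) / (2 * s2)
                log_q_xy = -np.sum((x - mean_y) ** 2) / (2 * s2)
                log_alpha = min(0.0, log_target(y) - log_target(x) + log_q_xy - log_q_yx)
                if np.log(rng.random()) < log_alpha:
                    x = y
                # Robbins-Monro update of the scale toward the 0.574 acceptance target.
                log_sigma += (np.exp(log_alpha) - target_acc) / n
            return x, np.exp(log_sigma)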

    Limit theorems for some adaptive MCMC algorithms with subgeometric kernels: Part II

    We prove a central limit theorem for a general class of adaptive Markov Chain Monte Carlo algorithms driven by sub-geometrically ergodic Markov kernels. We discuss in detail the special case of stochastic approximation. We use the result to analyze the asymptotic behavior of an adaptive version of the Metropolis Adjusted Langevin algorithm with a heavy-tailed target density.
    Comment: 34 pages.
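    For orientation only, such a result takes the generic shape below for ergodic averages of the adaptive chain; the precise assumptions, centering and expression for the asymptotic variance in the paper may differ.

        \sqrt{n}\left( \frac{1}{n}\sum_{k=1}^{n} f(X_k) - \pi(f) \right) \;\xrightarrow{d}\; \mathcal{N}\!\left(0, \sigma^{2}(f)\right)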

    Estimation of Network structures from partially observed Markov random fields

    We consider the estimation of high-dimensional network structures from partially observed Markov random field data using a penalized pseudo-likelihood approach. We fit a misspecified model obtained by ignoring the missing data problem. We study the consistency of the estimator and derive a bound on its rate of convergence. The results obtained relate the rate of convergence of the estimator to the extent of the missing data problem. We report some simulation results that empirically validate some of the theoretical findings.
    Comment: 24 pages, 1 figure.
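    A toy sketch of the general recipe, not of the estimator analyzed in the paper: node-wise l1-penalized logistic regressions (the pseudo-likelihood for a binary Markov random field), fitted while simply ignoring the missingness by zero-filling unobserved neighbours and dropping samples where the response node itself is missing. The ±1 coding, the penalty level C and the scikit-learn solver are assumptions of this illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def pseudo_likelihood_graph(X, C=0.1):
            # X: (n, p) array with entries in {-1.0, +1.0} and np.nan for missing values.
            # Returns a symmetrized matrix of estimated interactions; nonzeros suggest edges.
            n, p = X.shape
            B = np.zeros((p, p))
            for j in range(p):
                y = X[:, j]
                keep = ~np.isnan(y)                   # drop samples where node j is unobserved
                Z = np.delete(X[keep], j, axis=1)
                Z = np.nan_to_num(Z, nan=0.0)         # crude choice: missing neighbours contribute 0
                clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
                clf.fit(Z, (y[keep] > 0).astype(int))
                B[j, np.arange(p) != j] = clf.coef_.ravel()
            return 0.5 * (B + B.T)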

    Bayesian computation for statistical models with intractable normalizing constants

    This paper deals with some computational aspects of the Bayesian analysis of statistical models with intractable normalizing constants. In the presence of intractable normalizing constants in the likelihood function, traditional MCMC methods cannot be applied. We propose an approach to sample from such posterior distributions. The method can be thought of as a Bayesian version of the MCMC-MLE approach of Geyer and Thompson (1992). To the best of our knowledge, this is the first general and asymptotically consistent Monte Carlo method for such problems. We illustrate the method with examples from image segmentation and social network modeling. We also study the asymptotic behavior of the algorithm and obtain a strong law of large numbers for empirical averages.
    Comment: 20 pages, 4 figures, submitted for publication.
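    To give the flavour of the approach, here is a hedged sketch for an exponential-family model p(x | theta) ∝ exp(theta · s(x)): the unknown ratio of normalizing constants inside the Metropolis ratio is replaced by an importance-sampling estimate built from draws simulated once at a fixed reference parameter, in the spirit of MCMC-MLE. The reference parameter theta0, the array suff_stats_ref of sufficient statistics of those reference draws, and the prior are placeholders, and the sketch omits the refinements that make the authors' algorithm asymptotically consistent.

        import numpy as np

        def approx_log_z_ratio(theta, theta0, suff_stats_ref):
            # Importance-sampling estimate of log( Z(theta) / Z(theta0) ) from draws at theta0:
            # Z(theta)/Z(theta0) = E_theta0[ exp((theta - theta0) . s(X)) ].
            w = suff_stats_ref @ (theta - theta0)
            return np.logaddexp.reduce(w) - np.log(len(w))

        def approximate_mh(s_obs, suff_stats_ref, theta0, log_prior, n_iter=5000, step=0.1, seed=0):
            # Random-walk Metropolis on theta, with the intractable log Z(theta) replaced by its
            # estimate (the constant log Z(theta0) cancels in the acceptance ratio).
            rng = np.random.default_rng(seed)
            theta0 = np.asarray(theta0, dtype=float)
            theta = theta0.copy()

            def log_post(th):
                return log_prior(th) + th @ s_obs - approx_log_z_ratio(th, theta0, suff_stats_ref)

            lp, out = log_post(theta), []
            for _ in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.shape)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                out.append(theta.copy())
            return np.asarray(out)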